Recent privacy leakage incidents and increasingly stringent regulations have raised the compliance bar for companies and their mobile applications. Such obligations, however, pose significant challenges to app developers, who must comply with regulations spanning diverse viewpoints, activities, and roles, especially for small companies and developers with limited expertise or resources in this area. To address these obstacles, we developed NL2GDPR, an automated tool that generates a privacy policy from a developer's natural-language description of an app, while also ensuring that the app's functionality complies with the General Data Protection Regulation (GDPR). NL2GDPR is built on OIA (Open Information Annotation), an information-extraction tool developed by Baidu's Cognitive Computing Lab. At its core, NL2GDPR is a privacy-centric information-extraction model, coupled with a GDPR policy finder and a policy generator. We conduct a comprehensive study of the challenges in extracting privacy-centric information and generating privacy policies, and introduce optimizations tailored to this specific task. With NL2GDPR, we achieve 92.9%, 95.2%, and 98.4% precision in correctly identifying GDPR policies related to the types of personal data storage, processing, and sharing, respectively. To the best of our knowledge, NL2GDPR is the first tool that allows developers to automatically generate GDPR policies from nothing more than a natural-language description of app features. Note that other, non-GDPR-related features can be integrated with the generated ones to build complex applications.
Vision Transformers (ViTs) have a fundamentally different architecture from convolutional neural networks, with weaker inductive biases. As their performance improves, the security and robustness of ViTs also become increasingly important. In contrast to many recent works that study the robustness of ViTs against adversarial examples, this paper investigates a representative causative attack: the backdoor. We first examine the vulnerability of ViTs to various backdoor attacks and find that ViTs are also susceptible to existing attacks. However, we observe that the clean-data accuracy and the backdoor attack success rate of ViTs respond distinctively to patch transformations applied before positional encoding. Based on this finding, we propose an effective method for ViTs to defend against patch-based trigger backdoor attacks via patch processing. Performance is evaluated on several benchmark datasets, including CIFAR10, GTSRB, and TinyImageNet, and the results show that the proposed novel defense is highly successful in mitigating backdoor attacks on ViTs. To the best of our knowledge, this paper presents the first defense strategy that exploits the unique characteristics of ViTs against backdoor attacks.
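The abstract does not specify the exact patch-processing operation, but one natural instantiation of a "patch transformation before positional encoding" is random patch shuffling, which breaks apart a localized trigger while preserving global content. A minimal sketch, assuming images as 2-D nested lists; `patch_shuffle` and its parameters are illustrative, not the paper's exact procedure:

```python
import random

def patch_shuffle(image, patch_size, seed=None):
    """Split a square image (2-D list) into non-overlapping patches,
    randomly permute the patches, and reassemble the image. A localized
    backdoor trigger is thereby broken up and relocated, while the global
    semantics that ViTs rely on are largely preserved."""
    h, w = len(image), len(image[0])
    assert h % patch_size == 0 and w % patch_size == 0
    # Extract patches in row-major order.
    patches = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patches.append([row[j:j + patch_size]
                            for row in image[i:i + patch_size]])
    random.Random(seed).shuffle(patches)
    # Reassemble the shuffled patches into a new image.
    out = [[0] * w for _ in range(h)]
    k = 0
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            for di in range(patch_size):
                for dj in range(patch_size):
                    out[i + di][j + dj] = patches[k][di][dj]
            k += 1
    return out
```

Because the transformation only permutes whole patches, the multiset of pixel values (and each patch's content) is unchanged; only patch positions move.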
A common need for artificial-intelligence models in the broader geoscience community is to represent and encode various types of spatial data, such as points (e.g., points of interest), polylines (e.g., trajectories), polygons (e.g., administrative regions), graphs (e.g., transportation networks), or rasters (e.g., remote-sensing images), in a hidden embedding space so that they can be readily incorporated into deep-learning models. One fundamental step is to encode a single point location into an embedding space such that the embedding is learning-friendly for downstream machine-learning models such as support-vector machines and neural networks. We call this process location encoding. However, there is a lack of systematic review of the concept of location encoding, its potential applications, and the key challenges that need to be addressed. This paper aims to fill this gap. We first provide a formal definition of location encoding and discuss its necessity for AI research from a machine-learning perspective. Next, we provide a comprehensive survey and discussion of the current landscape of location-encoding research. We classify location-encoding models into different categories based on their inputs and encoding methods, and compare them according to whether they are parametric, multi-scale, distance-preserving, and direction-aware. We show that existing location-encoding models can be unified under a shared formulation framework. We also discuss the applications of location encoding to different types of spatial data. Finally, we point out several challenges that need to be addressed in future research.
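As a concrete illustration of the multi-scale, distance-aware encoders surveyed above, a common family encodes a 2-D point with sinusoids at geometrically spaced wavelengths. A minimal sketch, assuming planar coordinates; the function name and wavelength range are illustrative, not taken from any specific model in the survey:

```python
import math

def location_encode(x, y, num_scales=4, min_lambda=1.0, max_lambda=1000.0):
    """Encode a 2-D point into a multi-scale sinusoidal embedding.
    Each scale uses a geometrically spaced wavelength between min_lambda
    and max_lambda, and each coordinate contributes a (sin, cos) pair,
    yielding a 4 * num_scales dimensional representation in [-1, 1]."""
    emb = []
    ratio = max_lambda / min_lambda
    for s in range(num_scales):
        lam = min_lambda * ratio ** (s / max(num_scales - 1, 1))
        for coord in (x, y):
            emb.append(math.sin(2 * math.pi * coord / lam))
            emb.append(math.cos(2 * math.pi * coord / lam))
    return emb
```

The fine scales distinguish nearby points while the coarse scales preserve large-scale distance structure, which is what makes such embeddings learning-friendly for downstream models.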
In recent years, large amounts of effort have been put into pushing forward the real-world application of dynamic digital human (DDH). However, most current quality assessment research focuses on evaluating static 3D models and usually ignores motion distortions. Therefore, in this paper, we construct a large-scale dynamic digital human quality assessment (DDH-QA) database with diverse motion content as well as multiple distortions to comprehensively study the perceptual quality of DDHs. Both model-based distortion (noise, compression) and motion-based distortion (binding error, motion unnaturalness) are taken into consideration. Ten types of common motion are employed to drive the DDHs and a total of 800 DDHs are generated in the end. Afterward, we render the video sequences of the distorted DDHs as the evaluation media and carry out a well-controlled subjective experiment. Then a benchmark experiment is conducted with the state-of-the-art video quality assessment (VQA) methods and the experimental results show that existing VQA methods are limited in assessing the perceptual loss of DDHs. The database will be made publicly available to facilitate future research.
Time series anomaly detection strives to uncover potential abnormal behaviors and patterns from temporal data, and has fundamental significance in diverse application scenarios. Constructing an effective detection model usually requires adequate training data stored in a centralized manner; however, this requirement sometimes cannot be satisfied in realistic scenarios. As a prevailing approach to this problem, federated learning has demonstrated its power to cooperate with the distributed data available while protecting the privacy of data providers. However, it is still unclear how existing time series anomaly detection algorithms perform with decentralized data storage and privacy protection through federated learning. To study this, we conduct a federated time series anomaly detection benchmark, named FedTADBench, which involves five representative time series anomaly detection algorithms and four popular federated learning methods. We would like to answer the following questions: (1) How do time series anomaly detection algorithms perform when combined with federated learning? (2) Which federated learning method is the most appropriate for time series anomaly detection? (3) How do federated time series anomaly detection approaches perform on different partitions of data across clients? Numerous results as well as corresponding analyses are provided from extensive experiments with various settings. The source code of our benchmark is publicly available at https://github.com/fanxingliu2020/FedTADBench.
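The federated learning methods benchmarked above all build on server-side aggregation of client models. A minimal sketch of the classic FedAvg aggregation step, with model parameters flattened to lists of floats; the function name and representation are illustrative, not FedTADBench's actual API:

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model parameters (each a flat list of floats)
    into global parameters via a weighted average proportional to each
    client's local data size, as in the FedAvg algorithm."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w
```

In each communication round the server broadcasts `global_w`, clients train locally on their own time series, and the server re-aggregates; the raw data never leaves the clients.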
The opaqueness of the multi-hop fact verification model imposes imperative requirements for explainability. One feasible way is to extract rationales, a subset of inputs, whose removal causes prediction performance to drop dramatically. Though explainable, most rationale extraction methods for multi-hop fact verification explore the semantic information within each piece of evidence individually, while ignoring the topological information interaction among different pieces of evidence. Intuitively, a faithful rationale bears complementary information that enables other rationales to be extracted through the multi-hop reasoning process. To tackle these disadvantages, we cast explainable multi-hop fact verification as subgraph extraction, which can be solved with a graph convolutional network (GCN) with salience-aware graph learning. Specifically, the GCN is utilized to incorporate the topological interaction information among multiple pieces of evidence for learning evidence representations. Meanwhile, to alleviate the influence of noisy evidence, salience-aware graph perturbation is introduced into the message passing of the GCN. Moreover, a multi-task model with three diagnostic properties of rationales is elaborately designed to improve the quality of explanations without any explicit annotations. Experimental results on the FEVEROUS benchmark show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
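To make the GCN message passing referenced above concrete, here is a minimal sketch of one graph-convolution aggregation step over an evidence graph, using the standard symmetric normalization with self-loops; it is a generic GCN layer (without learned weights or the paper's salience-aware perturbation), not the authors' implementation:

```python
def gcn_layer(adj, feats):
    """One graph-convolution aggregation step with symmetric normalization:
    h_i' = sum_j A_hat[i][j] * h_j, where A_hat = D^-1/2 (A + I) D^-1/2.
    adj: n x n 0/1 adjacency (list of lists); feats: n x d features.
    Each evidence node's representation becomes a degree-normalized mix
    of its own features and those of topologically connected evidence."""
    n, d = len(adj), len(feats[0])
    # Add self-loops so each node keeps part of its own signal.
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    out = [[0.0] * d for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if a[i][j]:
                w = a[i][j] / ((deg[i] * deg[j]) ** 0.5)
                for k in range(d):
                    out[i][k] += w * feats[j][k]
    return out
```

Stacking such layers is what lets information flow along multi-hop paths between pieces of evidence, which per-evidence rationale extractors cannot capture.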
Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.
Diffractive optical neural networks (DONNs) have attracted much attention because of their significant advantages in power efficiency, parallelism, and computational speed over conventional deep neural networks (DNNs), which have intrinsic limitations when implemented on digital platforms. However, inversely mapping algorithm-trained physical model parameters onto real-world optical devices with discrete values is a non-trivial task, since existing optical devices have non-unified discrete levels and non-monotonic properties. This work proposes a novel device-to-system hardware-software codesign framework that enables efficient physics-aware training of DONNs with respect to arbitrary experimentally measured optical devices across layers. Specifically, Gumbel-Softmax is used to enable a differentiable mapping from real-world device parameters into the forward function of DONNs, where the physical parameters of a DONN can be trained by simply minimizing the loss function of the ML task. The results show that our proposed framework offers significant advantages over conventional quantization-based methods, especially with low-precision optical devices. Finally, the proposed algorithm is fully verified with a physical experimental optical system in the low-precision setting.
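The Gumbel-Softmax trick mentioned above relaxes a categorical choice over discrete device levels into a soft, temperature-controlled distribution, so gradients can flow through the level selection. A minimal sketch, assuming measured device levels are given as a list of floats; the helper names and the expectation-based mapping are illustrative, not the paper's exact formulation:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Sample a soft one-hot distribution over discrete choices:
    perturb each logit with Gumbel(0, 1) noise, then apply a
    temperature-controlled (tau) softmax."""
    noise = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + g) / tau for l, g in zip(logits, noise)]
    m = max(z)                      # stabilize the softmax
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def map_to_levels(logits, levels, tau=1.0, rng=random):
    """Map trainable logits onto non-uniform measured device levels as
    the expectation of the soft one-hot sample over those levels; as tau
    shrinks, the output concentrates on a single physical level."""
    probs = gumbel_softmax(logits, tau, rng)
    return sum(p * lvl for p, lvl in zip(probs, levels))
```

Because `map_to_levels` is a smooth function of the logits (for fixed noise), a DONN's physical parameters can be trained end-to-end by minimizing the task loss, even though the deployable device only supports the discrete `levels`.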
Over the past decade, digital humans have attracted increasing research interest, and much effort has been devoted to their representation, rendering, and animation. However, the quality assessment of digital humans has lagged behind. Therefore, to tackle the challenge of digital-human quality assessment, we propose the first large-scale quality-assessment database for scanned digital human heads (DHHs). The constructed database consists of 55 reference DHHs and 1,540 distorted DHHs, along with subjective ratings. Then, a simple yet effective full-reference (FR) projection-based method is proposed. A pretrained Swin Transformer Tiny is employed for hierarchical feature extraction, and a multi-head attention module is used for feature fusion. The experimental results show that the proposed method achieves state-of-the-art performance among mainstream FR metrics. The database and method presented in this work will be made publicly available.
We propose a robust and fast bundle-adjustment solution that estimates the 6-DoF pose of the camera and the geometry of the environment from rolling-shutter (RS) camera measurements. This addresses the challenges of existing works, namely their reliance on additional sensors or high-frame-rate video as input, restrictive assumptions on camera motion and readout direction, and low efficiency. To this end, we first investigate the influence of image-point normalization on RSBA performance and show a better approximation for modeling true 6-DoF camera motion. Then we propose a new analytical model for the visual residual covariance, which can be used to standardize the reprojection error during optimization and thereby improve overall accuracy. More importantly, the combination of normalization and covariance-standardization weighting in RSBA (NW-RSBA) can avoid common planar degeneracies without restricting the capture manner. Furthermore, we propose an acceleration strategy for NW-RSBA based on the sparsity of its Jacobian matrix and the Schur complement. Extensive experiments on synthetic and real data verify the effectiveness and efficiency of the proposed solution over state-of-the-art works. We also demonstrate that the proposed method can be easily implemented and plugged into the well-known GSSfM and GSSLAM systems as complete RSSfM and RSSLAM solutions.
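The abstract does not spell out which image-point normalization is used; a standard choice in geometric vision is Hartley-style normalization, sketched below as an illustration of the kind of conditioning step studied (it is an assumption, not necessarily the paper's exact scheme):

```python
import math

def normalize_points(pts):
    """Hartley-style normalization of 2-D image points: translate the
    centroid to the origin and scale so the mean distance from the origin
    is sqrt(2). Returns the normalized points and the 3x3 similarity
    transform T that maps homogeneous input points to normalized ones."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    mean_dist = sum(math.hypot(p[0] - cx, p[1] - cy) for p in pts) / n
    s = math.sqrt(2) / mean_dist
    T = [[s, 0.0, -s * cx],
         [0.0, s, -s * cy],
         [0.0, 0.0, 1.0]]
    norm = [(s * (p[0] - cx), s * (p[1] - cy)) for p in pts]
    return norm, T
```

Conditioning the point coordinates in this way keeps the terms entering the bundle-adjustment normal equations on comparable scales, which is why normalization can measurably change RSBA accuracy.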